
    Allocating the Sample Size in Phase II and III Trials to Optimize Success Probability

    Get PDF
    Background: Clinical trials of phase II and III often fail due to poor experimental planning. Here, the problem of allocating available resources, in terms of sample size, between phase II and phase III is studied with the aim of increasing the success rate, as measured by the overall success probability (OSP). Methods: Focus is placed on the amount of resources that should be provided to phase II and III trials to attain a good level of OSP, and on how many of these resources should be allocated to phase II to optimize OSP. It is assumed that phase II data are not used for confirmatory purposes, but only for planning phase III through sample size estimation. Letting r be the rate of resources allocated to phase II, OSP(r) is a concave function and there exists an optimal allocation r_opt giving max{OSP}. If M_I is the sample size giving the desired power to phase III, and k·M_I is the whole sample size that can be allocated to the two phases, it is indicated how large k and r should be in order to achieve levels of OSP of practical interest. Results: For example, when 5 doses are evaluated in phase II and 2 parallel phase III confirmatory trials (one-tail type I error = 2.5%, power = 90%) are considered with 2 groups each, k = 24 is needed to obtain OSP ≃ 75%, with r_opt ≃ 50%. The choice of k depends mainly on how many phase II treatment groups are considered, not on the effect size of the selected dose. When k is large enough, r_opt is close to 50%. An r ≃ 25%, although not optimal, might give a good OSP and an invitingly small total sample size, provided that k is large enough. Conclusions: To improve the success rate of phase II and phase III trials, drug development should be looked at in its entirety. Resources larger than those usually employed should be allocated to phase II to increase OSP: the phase II allocation rate may be increased to at least 25%, provided that a sufficient global amount of resources is available.
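    The trade-off described here can be sketched with a small Monte Carlo: split a total budget k·M_I between a multi-dose phase II (dose selection) and two confirmatory two-group phase III trials, then estimate OSP as a function of the phase II allocation rate r. All numerical settings below (effect sizes, sigma, per-group size) are hypothetical stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def osp(r, k=24, m_pow=86, effects=(0.0, 0.1, 0.2, 0.3, 0.4),
        sigma=1.0, n_sim=1000):
    """Monte Carlo estimate of the overall success probability OSP(r).

    r       : fraction of the total budget allocated to phase II
    k*m_pow : total sample size available to both phases (m_pow plays the
              role of the per-group phase III size giving the planned power)
    All numbers are hypothetical illustration values.
    """
    total = k * m_pow
    n2 = max(2, int(r * total / len(effects)))   # per-dose phase II size
    n3 = max(2, int((1 - r) * total / 4))        # 2 trials x 2 groups each
    z_crit = 1.959963984540054                   # one-tail alpha = 2.5%
    se3 = sigma * np.sqrt(2.0 / n3)              # s.e. of a phase III contrast
    successes = 0
    for _ in range(n_sim):
        # Phase II: observe one mean response per dose, pick the apparent best
        obs = rng.normal(effects, sigma / np.sqrt(n2))
        mu_sel = effects[int(np.argmax(obs))]
        # Phase III: success = both confirmatory trials significant
        z = rng.normal(mu_sel, se3, size=2) / se3
        successes += bool((z > z_crit).all())
    return successes / n_sim
```

Scanning `osp(r)` over a grid of r values reproduces the qualitative concave shape: too little phase II and the wrong dose gets confirmed; too little phase III and the confirmatory trials are underpowered.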

    Phase III Failures for a Lack of Efficacy can be, in Significant Part, Recovered (Introducing Success Probability Estimation Quantitatively)

    Get PDF
    The rate of phase III trial failures is approximately 42-45%, and most of them are due to a lack of efficacy. Some of the failures for a lack of efficacy are expected, being due to type I errors in phase II and type II errors in phase III. However, the rate of these expected failures falls far short of the global failure rate due to a lack of efficacy. In this work, the probability of an unexpected failure for a lack of efficacy in phase III trials is estimated to be about 14%, with credibility interval (9%, 18%). These failures can be recovered through adequate planning/empowering of phase II, and by adopting conservative estimation for the sample size of phase III. The software SP4CT (a free web application available at www.sp4ct.com) allows these computations. This 14% rate of unexpected failures implies that every year approximately 270,000 patients uselessly undergo a phase III trial, a large damage in individual ethics; moreover, the unavailability of many effective treatments is a considerable damage for collective ethics. The 14% of unexpected failures also produces more than $11bn of pure waste, and generates a much higher loss of revenue from drugs' marketing.

    Reproducibility Probability Estimation and RP-Testing for Some Nonparametric Tests

    Get PDF
    Several reproducibility probability (RP)-estimators for the binomial, sign, Wilcoxon signed rank and Kendall tests are studied. Their behavior in terms of MSE is investigated, as well as their performances for RP-testing. Two classes of estimators are considered: the semi-parametric one, where RP-estimators are derived from the expression of the exact or approximated power function, and the non-parametric one, whose RP-estimators are obtained on the basis of the nonparametric plug-in principle. In order to evaluate the precision of RP-estimators for each test, the MSE is computed, and the best overall estimator turns out to belong to the semi-parametric class. Then, in order to evaluate the RP-testing performances provided by RP-estimators for each test, the disagreement between the RP-testing decision rule, i.e., "accept H0 if the RP-estimate is lower than, or equal to, 1/2, and reject H0 otherwise", and the classical one (based on the critical value or on the p-value) is obtained. It is shown that the RP-based testing decision for some semi-parametric RP-estimators exactly replicates the classical one. In many situations, the RP-estimator replicating the classical decision rule also provides the best MSE.
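    The RP-testing rule quoted above can be made concrete in the simplest semi-parametric case: a one-sided Z-test, where the estimator plugs the observed statistic into the power function, giving RP-hat = Phi(z_obs - z_crit). This toy case (not one of the nonparametric tests studied in the paper) shows why the RP-based decision exactly replicates the classical one: RP-hat > 1/2 if and only if z_obs > z_crit.

```python
import math

Z_CRIT = 1.959963984540054  # z_{0.975}, one-sided alpha = 2.5%

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rp_estimate(z_obs, z_crit=Z_CRIT):
    """Semi-parametric RP estimator: observed z plugged into the power function."""
    return phi(z_obs - z_crit)

def rp_test(z_obs, z_crit=Z_CRIT):
    """RP-testing rule: reject H0 iff the RP estimate exceeds 1/2."""
    return rp_estimate(z_obs, z_crit) > 0.5

def classical_test(z_obs, z_crit=Z_CRIT):
    """Classical rule based on the critical value."""
    return z_obs > z_crit
```

Since Phi is strictly increasing and Phi(0) = 1/2, the two rules agree for every observed statistic in this case.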

    Unsupervised Place Recognition with Deep Embedding Learning over Radar Videos

    Full text link
    We learn, in an unsupervised way, an embedding from sequences of radar images that is suitable for solving the place recognition problem using complex radar data. We experiment on 280 km of data and show performance exceeding state-of-the-art supervised approaches, localising correctly 98.38% of the time when using just the nearest database candidate. Comment: to be presented at the Workshop on Radar Perception for All-Weather Autonomy at the IEEE International Conference on Robotics and Automation (ICRA) 202
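    The retrieval step implied by "just the nearest database candidate" reduces to nearest-neighbour search in embedding space. A minimal sketch under cosine similarity (function and variable names are hypothetical; the embedding network itself is not reproduced here):

```python
import numpy as np

def recognise(query_embs, db_embs, db_places):
    """Nearest-database-candidate place recognition (sketch).

    query_embs : (Q, D) embeddings of query radar sequences
    db_embs    : (N, D) embeddings of mapped places
    db_places  : N place labels, one per database embedding
    """
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    d = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = q @ d.T                 # cosine similarity matrix, shape (Q, N)
    idx = sims.argmax(axis=1)      # single nearest candidate per query
    return [db_places[i] for i in idx]
```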

    Semantic Interpretation and Validation of Graph Attention-based Explanations for GNN Models

    Full text link
    In this work, we propose a methodology for investigating the use of semantic attention to enhance the explainability of Graph Neural Network (GNN)-based models. Graph Deep Learning (GDL) has emerged as a promising field for tasks like scene interpretation, leveraging flexible graph structures to concisely describe complex features and relationships. As traditional explainability methods used in eXplainable AI (XAI) cannot be directly applied to such structures, graph-specific approaches are introduced. Attention has been previously employed to estimate the importance of input features in GDL; however, the fidelity of this method in generating accurate and consistent explanations has been questioned. To evaluate the validity of using attention weights as feature importance indicators, we introduce semantically-informed perturbations and correlate predicted attention weights with the accuracy of the model. Our work extends existing attention-based graph explainability methods by analysing the divergence in the attention distributions in relation to semantically sorted feature sets and the behaviour of a GNN model, efficiently estimating feature importance. We apply our methodology on a lidar pointcloud estimation model, successfully identifying key semantic classes that contribute to enhanced performance and effectively generating reliable post-hoc semantic explanations. Comment: International Conference on Advanced Robotics (ICAR 2023)
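    The validation idea described above, checking whether the attention a model places on a semantic class predicts the accuracy drop when that class's features are perturbed, can be sketched as a rank correlation. Names and inputs are hypothetical; the sketch ignores ties in the rankings:

```python
import numpy as np

def attention_fidelity(attn, acc_drop):
    """Spearman rank correlation between attention and perturbation impact.

    attn     : mean attention weight assigned to each semantic class
    acc_drop : accuracy loss observed when that class's features are perturbed
    Returns a value in [-1, 1]; values near 1 support attention as a
    feature-importance indicator. (No tie handling -- sketch only.)
    """
    def ranks(x):
        order = np.argsort(x)
        r = np.empty(len(x), dtype=float)
        r[order] = np.arange(len(x))
        return r
    ra, rd = ranks(np.asarray(attn)), ranks(np.asarray(acc_drop))
    ra -= ra.mean()
    rd -= rd.mean()
    return float((ra * rd).sum() / np.sqrt((ra ** 2).sum() * (rd ** 2).sum()))
```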

    Point-based metric and topological localisation between lidar and overhead imagery

    Get PDF
    In this paper, we present a method for solving the localisation of a ground lidar using overhead imagery only. Public overhead imagery, such as Google satellite images, is a readily available resource. It can be used as the map proxy for robot localisation, relaxing the requirement for a prior traversal for mapping as in traditional approaches. While prior approaches have focused on the metric localisation between range sensors and overhead imagery, our method is the first to learn both place recognition and metric localisation of a ground lidar using overhead imagery, and it also outperforms prior methods on metric localisation with large initial pose offsets. To bridge the drastic domain gap between lidar data and overhead imagery, our method learns to transform an overhead image into a collection of 2D points, emulating the point-cloud that a lidar sensor situated near the centre of the overhead image would scan. After both modalities are expressed as point sets, point-based machine learning methods for localisation are applied.
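    The image-to-points idea can be caricatured in a few lines. In this sketch a plain intensity threshold stands in for the learned transformation described in the paper; only the coordinate convention (a virtual lidar at the image centre) is carried over:

```python
import numpy as np

def image_to_points(overhead, threshold=0.5):
    """Turn an overhead image into a 2D point set (hypothetical stand-in).

    Pixels above `threshold` are treated as 'structure' and their pixel
    coordinates are re-centred so the virtual lidar sits at the image
    centre, mimicking a lidar point-cloud in the sensor frame.
    """
    ys, xs = np.nonzero(overhead > threshold)
    h, w = overhead.shape
    # x right, y up-negative in image coordinates, origin at image centre
    pts = np.stack([xs - w / 2.0, ys - h / 2.0], axis=1)
    return pts.astype(float)
```

Once both modalities are point sets like this, any point-based matcher (e.g. an ICP variant or a learned registration network) can operate on them symmetrically, which is the design choice that closes the domain gap.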

    Contextual, Optimal and Universal Realization of the Quantum Cloning Machine and of the NOT gate

    Full text link
    A simultaneous realization of the Universal Optimal Quantum Cloning Machine (UOQCM) and of the Universal-NOT gate by quantum-injected optical parametric amplification (QIOPA) is reported. The two processes, forbidden in their exact form by fundamental quantum limitations, are found to be universal and optimal, and the measured fidelity F<1 is found close to the limit values evaluated by quantum theory. This work may shed light on the yet little-explored interconnections of fundamental axiomatic properties within the deep structure of quantum mechanics. Comment: 10 pages, 2 figures
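    For context, the theoretical limits against which a measured fidelity F<1 is compared in this setting are standard results: the Gisin-Massar optimal fidelity for universal N-to-M qubit cloning, F = (MN + M + N)/(M(N + 2)), and the optimal universal-NOT fidelity F = (N + 1)/(N + 2). A quick numerical check of the single-copy cases:

```python
def cloning_fidelity(n, m):
    """Optimal universal N->M qubit cloning fidelity (Gisin-Massar bound)."""
    return (m * n + m + n) / (m * (n + 2))

def unot_fidelity(n=1):
    """Optimal universal-NOT fidelity for N input copies."""
    return (n + 1) / (n + 2)

# 1 -> 2 cloning: 5/6 ~ 0.833; single-copy U-NOT: 2/3 ~ 0.667
```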

    The Oxford Road Boundaries Dataset

    Full text link
    In this paper we present the Oxford Road Boundaries Dataset, designed for training and testing machine-learning-based road-boundary detection and inference approaches. We have hand-annotated two of the 10-km-long forays from the Oxford RobotCar Dataset and generated several thousand further examples with semi-annotated road-boundary masks from other forays. To boost the number of training samples in this way, we used a vision-based localiser to project labels from the annotated datasets to other traversals at different times and weather conditions. As a result, we release 62,605 labelled samples, of which 47,639 are curated. Each of these samples contains both raw and classified masks for the left and right lenses. Our data contains images from a diverse set of scenarios such as straight roads, parked cars, junctions, etc. Files for download and tools for manipulating the labelled data are available at: oxford-robotics-institute.github.io/road-boundaries-dataset Comment: Accepted for publication at the workshop "3D-DLAD: 3D-Deep Learning for Autonomous Driving" (WS15), Intelligent Vehicles Symposium (IV 2021)

    Robot-Relay : Building-Wide, Calibration-Less Visual Servoing with Learned Sensor Handover Network

    Full text link
    We present a system which grows and manages a network of remote viewpoints during the natural installation cycle of a newly installed camera network or a newly deployed robot fleet. No explicit notion of camera position or orientation is required, neither global (i.e., relative to a building plan) nor local (i.e., relative to an interesting point in a room). Furthermore, no metric relationship between viewpoints is required. Instead, we leverage our prior work on effective remote control without extrinsic or intrinsic calibration and extend it to the multi-camera setting. In doing so, we memorise, from simultaneous robot detections in the tracker thread, soft pixel-wise topological connections between viewpoints. We demonstrate our system with repeated autonomous traversals of workspaces connected by a network of six cameras across a productive office environment. Comment: Paper accepted to the 18th International Symposium on Experimental Robotics (ISER 2023)
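    The topological connections memorised from simultaneous detections suggest a simple weighted-graph data structure: each co-detection strengthens the link between the cameras involved, and link weights then suggest handover candidates. A minimal sketch (viewpoint-level only; the paper's pixel-wise formulation is not reproduced, and all names are hypothetical):

```python
from collections import defaultdict

class ViewpointGraph:
    """Soft topological links between camera viewpoints (sketch)."""

    def __init__(self):
        self.weights = defaultdict(int)  # (cam_a, cam_b) -> co-detection count

    def observe(self, cameras_seeing_robot):
        """Record one simultaneous detection across a set of cameras."""
        cams = sorted(set(cameras_seeing_robot))
        for i in range(len(cams)):
            for j in range(i + 1, len(cams)):
                self.weights[(cams[i], cams[j])] += 1

    def neighbours(self, cam):
        """Cameras linked to `cam`, strongest link first."""
        links = [(b if a == cam else a, w)
                 for (a, b), w in self.weights.items() if cam in (a, b)]
        return sorted(links, key=lambda t: -t[1])
```

During operation, `neighbours` of the currently active camera gives the ordered list of views to hand the robot over to, with no metric calibration involved.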

    Doppler-aware Odometry from FMCW Scanning Radar

    Full text link
    This work explores Doppler information from a millimetre-wave (mm-W) Frequency-Modulated Continuous-Wave (FMCW) scanning radar to make odometry estimation more robust and accurate. Firstly, Doppler information is added to the scan-masking process to enhance correlative scan matching. Secondly, we train a Neural Network (NN) to regress forward velocity directly from a single radar scan; we fuse this estimate with the correlative scan-matching estimate and show improved robustness to bad estimates caused by challenging environment geometries, e.g. narrow tunnels. We test our method on a novel custom dataset, which is released with this work at https://ori.ox.ac.uk/publications/datasets. Comment: Accepted to ITSC 202
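    The fusion of the network's velocity regression with the scan-matching estimate can be sketched as inverse-variance weighting plus a consistency gate that falls back to the network when scan matching fails badly (e.g. in a narrow tunnel). This is a generic estimator sketch, not the paper's method, and all noise parameters are hypothetical:

```python
def fuse_velocity(v_scan, v_nn, sigma_scan=0.2, sigma_nn=0.5, gate=2.0):
    """Fuse scan-matching and network forward-velocity estimates (sketch).

    If the two estimates disagree by more than `gate` combined standard
    deviations, the scan-matching estimate is treated as an outlier and
    the network estimate is used alone; otherwise the two are combined
    by inverse-variance weighting.
    """
    combined_sigma = (sigma_scan ** 2 + sigma_nn ** 2) ** 0.5
    if abs(v_scan - v_nn) > gate * combined_sigma:
        return v_nn                      # scan matching rejected as outlier
    w_scan = 1.0 / sigma_scan ** 2
    w_nn = 1.0 / sigma_nn ** 2
    return (w_scan * v_scan + w_nn * v_nn) / (w_scan + w_nn)
```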